Patent abstract:
METHOD FOR VIEWING A MEDICAL REPORT DESCRIBING RADIOLOGICAL IMAGES USING DESCRIPTORS SELECTED FROM A PREDEFINED DESCRIPTIVE LIST, AND COMPUTER PROGRAM PRODUCT INCLUDING NON-TRANSITORY COMPUTER DATA STORED ON A TANGIBLE COMPUTER-READABLE MEDIUM. A report method and report viewer for viewing a structured report, such as a medical report describing radiological images using descriptors selected from a predefined list of descriptors, includes the actions of opening a medical report and, in response to the opening action, searching for another report related to the medical report's descriptors and highlighting words and/or sentences in the other report that correspond to keywords derived from the descriptors. The medical report and the other report can be displayed simultaneously with the words and/or sentences highlighted. The other report can include an unstructured text report, and the method can also include mapping the descriptors to findings in the text report and highlighting the findings.
Publication number: BR112012026477B1
Application number: R112012026477-0
Filing date: 2011-03-29
Publication date: 2021-02-02
Inventors: Yuechen Qian; Merlijn Sevenster; Giselle Rebecca Isner
Applicant: Koninklijke Philips N.V.
IPC main class:
Patent description:

The present system relates, in general, to a report viewer and, more particularly, to an intelligent report viewer for viewing reports that use a predetermined set of radiological descriptors, such as the descriptors used in BIRADS (Breast Imaging Reporting and Data System), referred to as BIRADS descriptors, used in medical imaging systems, and to a method of operating the same.
Doctors (for example, radiologists and oncologists) deal with an increasing amount of information in order to diagnose and treat patients optimally. Cancer patients, for example, often undergo imaging exams and, over time, accumulate dozens of studies in their medical records. After an image of a breast is obtained, for example, using a mammogram or ultrasound, a radiologist looks at the image(s) and writes a report giving an opinion on the health of the breast shown in the image(s), using the current images and previous reports, images or exam(s) that are compared with the current images. That is, each time doctors read a new exam, they need to compare the current exam with the previous exams in order to determine the progress of previously identified lesions and to detect new lesions, if any. This task requires physicians to read, interpret and correlate findings in images and/or reports, including comparing current and previous images and/or reports, which is both time-consuming and clinically challenging in terms of workflow.
Solutions have been proposed to help doctors perform such tasks more easily. The American College of Radiology has established standards for evaluating images obtained using, for example, mammograms, called BIRADS. BIRADS is designed to document breast cancer studies in a structured way using a standardized vocabulary. Systems have been developed to prepare radiology reports for breast cancer patients using BIRADS.
The Philips Healthcare™ Integral Breast™ product allows radiologists to record lesions according to BIRADS on the images and to store the annotations in a database. The next time radiologists read the studies of recurring patients, they can view the previous images with the annotated findings without reading all the associated text reports, which significantly saves reading time.
FIG. 1 shows a report 100 using BIRADS for annotation on images, which includes several sections, such as a Patient Information section 110, a Study Overview section 120, a General Conclusion section 130 and a 'Finding Details 1' section 140, which includes the BIRADS finding and descriptors as selected from the menus and/or lists of a GUI 160 shown on the right side of FIG. 1. The Patient Information section 110 includes patient identification and other relevant information, such as the name and ID number assigned to the patient; gender, date of birth and relevant patient history; and the requester of the report. The Study Overview section 120 shows diagrams 122 of the front and side views of the patient's right and left breasts with any detected lesions, as well as the name of the doctor or radiologist, the study date and other scan information shown in boxes 124 above diagrams 122, and other data and findings in a box 126 below diagrams 122, such as the density of the breast, noted as being extremely dense.
The General Conclusion section 130 includes notes and recommendations, where three box entries are shown, for example: the first box 132 includes "Large mass detected in the right breast. In addition, several calcifications are present in both breasts". The second box 134 includes an "Overall BI-RADS assessment category" indicated as "4A", where the BI-RADS 4 category indicates 'Possibly malignant', 'Suspicious abnormality' and/or 'Not characteristic of breast cancer, but reasonable probability of being malignant; biopsy should be considered', for example. The third box 136 of the General Conclusion section 130 shown in FIG. 1 can indicate 'Follow-up recommendation: targeted US (ultrasound); if negative, then follow-up MRI.'
The Finding Details section 140 includes a box 142 with the finding type and mass information, such as the BIRADS 4A rating category indicating 'Suspicious'. Below box 142, the breast diagrams 122 are shown again, also including the added note(s) 144. A 'Location' box 146 is located below diagrams 122 and includes the identification information of the location of any mass or lesion identified in diagrams 122, such as: Laterality = Right; Clock position or region = 5 o'clock; Depth = Posterior. The next box below the 'Location' box 146 is entitled 'Mass Properties' and includes the BIRADS descriptions of the mass or lesion identified in diagrams 122, such as: Shape = Lobular; Margin = Obscured; Density modifier = High density. Other boxes can also be included, such as a box 150 entitled 'Associated Finding', with categories such as 'Skin retraction', 'Post-surgical scar', etc. In addition, key X-ray images can be shown, such as the X-ray mammography images 170.
To use such systems optimally, previous studies need to be annotated in the same way. Previous studies are studies that were diagnosed and documented as unstructured or free-text reports before the introduction of systems such as BIRADS. In practice, previous studies are usually not "re-annotated" due to issues of quality assurance, cost and lack of resources.
However, doctors need to review current studies or reports and compare them with previous reports/studies. Manually comparing previous and current studies is time-consuming and prone to errors, such as overlooking or missing the annotation of certain information necessary for a proper comparison and diagnosis.
To truly benefit from using systems like BIRADS, doctors need to read and use the previous unstructured free-text reports and the structured BIRADS findings effectively at the same time. This requires a method that mediates between "new" structured data, where descriptors or findings are selected from a limited or predefined set of descriptors or findings, and "old" plain-text reports, also referred to as free-text reports or unstructured text reports, where any words, descriptors or findings can be used without restriction, that is, without being limited to any predefined or particular set of words, descriptors or findings. Thus, there is a need for physicians to read and effectively use previous unstructured free-text reports and structured BIRADS findings at the same time, and a need for mediation between the "new" structured data, from BIRADS, for example, and the "old" plain-text reports.
An objective of the present system, methods and devices (hereinafter systems, unless the context indicates otherwise) is to overcome the disadvantages of conventional systems and devices, including assisting in the preparation of an appropriate study/report and diagnosis that takes into account relevant previous studies, for example, for comparison with current studies, such as by suggesting previous studies that are relevant to the selected BIRADS annotation and highlighting fragments of sentences, or sentence groups, relevant to the selected BIRADS annotation.
Illustrative embodiments include a method, and a report viewer comprising a processor for performing the method and/or various mechanisms and modules, for viewing a structured report, such as a medical report describing radiological images using descriptors selected from a predefined list of descriptors, such as BIRADS descriptors. The method comprises the actions of opening the medical report and, in response to the opening action, searching, through a processor, for another report related to the medical report's descriptors, and highlighting the words and/or sentences in the other report that match keywords derived from the descriptors. The medical report and/or the other report can be displayed simultaneously with the words and/or sentences highlighted. The other report can be selected from a plurality of reports found through the search. In addition, where the other report comprises an unstructured text report, the method also comprises mapping the descriptors to findings in the text report and highlighting the findings.
The opening action may include the action of selecting descriptors by a user, where the search and highlighting actions are performed in response to the selection action. In addition, the descriptors can be automatically extracted from the first report in response to the opening action. The medical report can include an image annotated with the descriptors, and the descriptors can be automatically extracted from the image in response to the opening action.
The search may include analyzing the other report, through a report analyzer, to obtain interpretations; translating the descriptors into keywords through an ontology mechanism; and matching the keywords with the interpretations through a matching and discourse mechanism to identify interpretations that match the keywords. In addition, the analysis may include segmenting the other report into sections; identifying sentences in the sections; grouping words in the sentences to form grouped words for each sentence; determining the modality and laterality of each sentence from the grouped words; and mapping the modality and laterality of each sentence to laterality and modality words to obtain the interpretations.
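As a rough illustration of these analysis steps, the following Python sketch (all names, word lists and conventions here are hypothetical simplifications, not the patent's actual implementation) segments a free-text report into sections, identifies sentences, groups their words, and tags each sentence with a modality and laterality to form interpretations:

```python
import re

# Hypothetical vocabularies; a real analyzer would use a much richer lexicon.
MODALITY_WORDS = {"ultrasound": "US", "sonogram": "US", "mammogram": "MAM", "mri": "MRI"}
LATERALITY_WORDS = {"right": "right", "left": "left", "bilateral": "bilateral"}

def analyze_report(text):
    """Transform a free-text report into a list of interpretations."""
    interpretations = []
    # Step 1: segment the report into sections on blank lines (assumed convention).
    for section in re.split(r"\n\s*\n", text.strip()):
        modality = laterality = None
        # Step 2: identify sentences within the section.
        for sentence in re.split(r"(?<=[.!?])\s+", section):
            # Step 3: group the words of the sentence.
            words = [w.lower().strip(".,;") for w in sentence.split()]
            # Step 4: determine modality and laterality, carrying values forward
            # within a section when a sentence does not restate them.
            for w in words:
                modality = MODALITY_WORDS.get(w, modality)
                laterality = LATERALITY_WORDS.get(w, laterality)
            # Step 5: map each sentence to an interpretation.
            interpretations.append(
                {"sentence": sentence, "modality": modality, "laterality": laterality})
    return interpretations

report = ("Bilateral breast ultrasound was performed. At the 10 o'clock position "
          "of the right breast, two contiguous cysts were seen.")
for interp in analyze_report(report):
    print(interp["modality"], interp["laterality"])
```

Note that the second sentence inherits the modality "US" from the first sentence, since it does not mention the modality itself.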
The translation through the ontology mechanism, for example, may include analyzing the descriptors to obtain a list of properties corresponding to the descriptors; associating each property with semantically relevant words using a mapper that accesses an ontology database; and retaining the semantically relevant words to obtain the keywords.
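A minimal sketch of this translation step might look as follows; the dictionary standing in for the ontology database and the function name are hypothetical, and a real ontology mechanism would query a terminology resource such as RadLex or SNOMED rather than a hand-written table:

```python
# Hypothetical in-memory "ontology": each (property, value) pair maps to
# semantically relevant words.
ONTOLOGY = {
    ("modality", "US"): ["ultrasound", "sonography", "echo"],
    ("laterality", "right"): ["right", "right breast", "right side"],
    ("shape", "lobular"): ["lobular", "lobulated"],
}

def descriptors_to_keywords(descriptors):
    """Translate a list of (property, value) descriptors into keywords."""
    keywords = []
    for prop, value in descriptors:
        # Retain the semantically relevant words for each property; fall back
        # to the literal value when the ontology has no entry.
        keywords.extend(ONTOLOGY.get((prop, value), [value]))
    return keywords

print(descriptors_to_keywords([("modality", "US"), ("laterality", "right")]))
```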
Another embodiment includes a computer program product including non-transitory computer data stored on a tangible computer-readable medium, where the computer program product comprises program code configured to perform one or more actions according to the methods for viewing a report generated and/or annotated using a limited or predefined set of descriptors, such as BIRADS descriptors.
These and other characteristics, aspects and advantages of the apparatus, systems and methods of the present invention will become better understood from the description below, the appended claims and the accompanying drawings, in which: FIG. 1 shows a conventional report using structured BIRADS data to annotate images; FIG. 2A shows views of reports that are automatically correlated and simultaneously displayed according to an embodiment of the present system; FIG. 2B shows the BIRADS GUI shown in FIG. 2A in greater detail; FIG. 3 shows a block diagram illustrating a system with the interaction flow between the components according to an embodiment of the present system; FIG. 4 shows a block diagram illustrating a system with the interaction flow between the components according to another embodiment of the present system; FIG. 5 shows a block diagram illustrating a system with the interaction flow between the components according to another embodiment of the present system; FIG. 6 shows an illustrative example of a highlighted unstructured report where the relevant parts are highlighted according to an embodiment of the present system; FIG. 7 shows an illustrative embodiment of an ontology mechanism shown in FIGs. 3-4 according to an embodiment of the present system; FIG. 8 shows an illustrative embodiment of a report analyzer shown in FIGs. 3-4 according to an embodiment of the present system; FIG. 9 shows a graphic illustrating the user interface according to an embodiment of the present system; and FIG. 10 shows a part of a system according to embodiments of the present system. The following descriptions of illustrative embodiments, when taken together with the accompanying drawings, will demonstrate the characteristics and advantages mentioned above, as well as others. In the following description, for purposes of explanation rather than limitation, illustrative details are set forth, such as architecture, interfaces, techniques, element attributes, etc.
However, it will be apparent to those skilled in the art that other embodiments that depart from these details would still be understood to be within the scope of the appended claims. In addition, for the sake of clarity, detailed descriptions of well-known devices, circuits, tools, techniques and methods are omitted so as not to obscure the description of the present system. It should be expressly understood that the drawings are included for illustrative purposes and do not represent the scope of the present system. In the accompanying drawings, like reference numerals in different drawings may designate similar elements.
For purposes of simplifying the description of the present system, the terms "operatively coupled", "coupled" and formatives thereof, as used herein, refer to a connection between devices and/or parts thereof that permits operation in accordance with the present system. For example, an operative coupling may include one or more couplings of a wired connection and/or a wireless connection between two or more devices that enable one- and/or two-way communication paths between the devices and/or parts thereof. For example, an operative coupling can include a wired and/or wireless coupling to enable communication between a processor, memory, server and other devices, such as an analyzer, segmenter, mappers and/or stemmers.
The term rendering and formatives thereof, as used herein, refer to providing content, such as digital media that can include, for example, images annotated with descriptors, a list of descriptors for selecting and annotating desired parts of images, etc., so that it can be perceived by at least one sense of the user, such as the sense of sight and/or the sense of hearing. For example, the present system can render a user interface on a display device so that it can be seen and interacted with by a user. In addition, the present system can render audiovisual content on both a device that produces audible output (for example, a speaker) and a device that produces visual output (for example, a display). To simplify the discussion below, the term content and formatives thereof will be used and should be understood to include audio content, visual content, audiovisual content, textual content and/or other types of content, unless a particular type of content is specifically intended, as can be readily appreciated.
User interaction with, and manipulation of, the computer environment can be achieved using any of a variety of types of human-processor interface devices that are operatively coupled to a processor or processors controlling the displayed environment. A common interface device for a user interface (UI), such as a graphical user interface (GUI), is a mouse, trackball, keyboard, touch-sensitive display, pointing device (for example, a pen), etc. For example, a mouse can be moved by a user on a flat work surface to move a visual object, such as a cursor, represented on a two-dimensional display surface, in a direct mapping between the position of the user's manipulation and the represented position of the cursor. This is typically known as position control, where the motion of the represented object directly correlates with the motion of the user's manipulation.
An example of such a GUI, according to embodiments of the present system, is a GUI that can be provided by a computer program that can be invoked by the user in order to allow the user to select and/or classify/annotate content, such as, for example, an image annotated with descriptors or textual content with highlighted parts.
FIG. 2A shows a view 200 of reports that are automatically correlated and simultaneously displayed according to an embodiment of the present systems and methods. As shown in FIG. 2A, in a current study 210 (also referred to as a reference study), shown on the right side of FIG. 2A, a lesion in an image is or has been described using BIRADS descriptors 220 selected from a BIRADS GUI 230, where images can be annotated with the BIRADS descriptors 220 using systems such as Integral Breast™, or other systems, as described in a publication entitled "Complete Digital Textual and Iconic Annotation for Mammography" by Wittenberg et al., available on the network at sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-283/p091.pdf, which is incorporated herein by reference in its entirety, and published in CEUR Workshop Proceedings, March 2007, Munich, volume 283, "Bildverarbeitung für die Medizin", pages 91-95. FIG. 2B shows the BIRADS GUI 230 itself for better clarity.
The current or reference study 210 can be any desired or selected study, including a BIRADS study, where descriptors are selected from a limited or predefined/predetermined set of descriptors. For simplicity, the reference study 210 will be referred to as the BIRADS study, but it should be understood that the reference study 210 can be any desired and/or selected study to be compared to different related and/or selected studies, referred to herein as previous studies. On the left side of FIG. 2A, a text report 240 from a previous study on a previous system using unstructured or free/plain text is shown, where there are no restrictions on descriptors or words, and which can be dictated by a radiologist reviewing the images, for example. FIG. 6 shows the text report 240 itself for clarity, where, instead of the text corresponding to the BIRADS descriptors being enclosed in a box, the corresponding text is highlighted.
The previous study need not necessarily precede the current or reference study 210 and may be any study, such as a study with free or unstructured text. The present system automatically finds the previous study 240 that is relevant to the current study 210, and the related parts of the previous study 240 that correspond to the BIRADS descriptors selected from the BIRADS GUI 230 and/or to the current study/images annotated with BIRADS descriptors. For example, the present system automatically finds a list of previous studies related to the breast exam of the same patient as a current breast study, ordered in any desired or selected order, such as by date. The automatic search or extraction can be narrowed using attributes, such as modality and/or laterality, which can be selected by the user or extracted automatically from the BIRADS descriptors. For example, when the modality is 'ultrasound' (US) and the laterality is 'right', then previous reports that include US and right-breast information (for example, text and/or images) for the particular patient are automatically extracted, for example, using image and/or text recognition algorithms or devices, where the most recent such report can be considered the most relevant previous study. Alternatively, or in addition, the previous study 240 can be selected by the user for correlation and highlighting of the relevant words, sentences and/or groups of words and sentences that are related or correspond to the BIRADS descriptors, which can be selected by the user or extracted automatically from an image that is annotated with the BIRADS descriptors. It is also understood that the present system can be used to compare studies from other patients with the current study.
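The narrowing step described above can be sketched as a simple filter and sort; the record layout, field names and sample data below are hypothetical, chosen only to illustrate the modality/laterality filtering with most-recent-first ordering:

```python
from datetime import date

# Hypothetical index of previous reports with their extracted attributes.
priors = [
    {"date": date(2009, 5, 1), "modality": "MAM", "laterality": "right", "id": "R1"},
    {"date": date(2010, 2, 3), "modality": "US",  "laterality": "right", "id": "R2"},
    {"date": date(2010, 8, 9), "modality": "US",  "laterality": "left",  "id": "R3"},
]

def relevant_priors(reports, modality, laterality):
    """Keep only reports matching the current study's modality and laterality,
    ordered most recent first (the most recent is considered most relevant)."""
    hits = [r for r in reports
            if r["modality"] == modality and r["laterality"] == laterality]
    return sorted(hits, key=lambda r: r["date"], reverse=True)

print([r["id"] for r in relevant_priors(priors, "US", "right")])  # → ['R2']
```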
Depending on the modality of the current study including annotations, for example, BIRADS annotations of a lesion in an image describing the lesion, where such BIRADS annotations can be overlaid on the image, doctors need to know which reports from previous studies have the same modality as that of the current study/annotation. In the case of breast cancer, mammography using X-rays (MAM), ultrasound imaging (US) and magnetic resonance imaging (MRI) are frequently used modalities. Next, doctors need to open the reports of those previous studies having the same modality and read the content of the reports. Often, the reports contain findings from multiple modalities; in the case of breast cancer reports, findings from mammography and ultrasound can be reported in the same document. Doctors need to find the sentences where the lesion annotated in the current study, for example, using BIRADS, was described in the previous study with the same modality. That is, if the current modality is US or ultrasound, and a previous report includes text related to both MAM and US, then doctors need to find the text related to US, not MAM. After finding the previous reports related to the currently annotated lesion, doctors will compare the progress of the lesion in terms of size, shape, margin, density, etc.
As can be readily appreciated, performing such a task manually is time-consuming and prone to errors. The present systems and methods help physicians to perform the tasks mentioned above by finding and suggesting previous studies that are relevant to the selected BIRADS annotation, and by highlighting fragments of sentences relevant to the selected BIRADS annotation. Thus, in the previous example, if the current report or study is a US study of a patient's right breast, then the relevant previous reports are found and suggested, where such previous reports are considered relevant if they include US studies, or study-related text, of the patient's right breast. In the case where a relevant previous report includes studies of both the patient's right and left breasts, using both US and MAM modalities, only the parts of the previous report that are relevant to the current study are highlighted, namely, the parts that are related to US of the right breast. Thus, parts of the previous study related to the left breast, or related to MAM of the right breast, are not highlighted, since these parts are not relevant to the current study, which is a US study of the patient's right breast.
Individually reading a collection of text reports to identify sentences relevant to the selected annotation (for example, US, patient X's right breast) is challenging. To assist physicians in identifying relevant sentences, the present systems and methods automatically search for, suggest and/or provide related previous studies, such as the previous study 240 with the free or plain/unstructured text report shown on the left side of FIG. 2A, and automatically highlight sentences that are related to the annotation, for example, the BIRADS annotation, of the current study 210 shown on the right side of FIG. 2A. In this embodiment, the relevant sentences of the selected previous study 240 are highlighted by enclosing the relevant sentences with box(es) 250, for example. It should be understood that the highlighting may include at least one of changing the appearance of the content of the selected report, such as word(s) and/or sentence(s), to be different from other parts of the report, painting and/or overlaying a colored background on the selected report or part(s) of the content, enclosing the selected part(s) of the content with border(s) or box(es), or any combination(s) thereof.
In addition to the automatic selection of the relevant previous study 240, the present system can select any desired study for comparison with a current or reference study, such as the study 210 annotated in BIRADS, also referred to as the BIRADS study 210. The present system may include a processor or controller configured to display an interface, such as a graphical user interface (GUI) on a display or monitor, with a menu including a list of previous studies for user selection. The presented list of previous studies can be ordered using selected criterion(s), such as date, type of exam, modality and/or relevance or importance, as indicated in the previous study or in metadata associated with the previous study. Keyword-based highlighting of documents can be found in many existing applications; using Internet Explorer, for example, the user can search for keywords on a web page and the application can highlight all occurrences of the keywords. Rather than highlighting all occurrences of keywords, the present system maps or translates the selected BIRADS description of lesions to syntactically, semantically and clinically relevant words, which are then used to determine and highlight the relevant sentences and/or parts or groups of relevant words that are determined to be most related to the selected BIRADS descriptions or BIRADS annotations added to an image. For example, if the image is that of the right breast, then not all occurrences of 'breast' in the free or unstructured text will be highlighted. Instead, the highlighting is limited to 'breast' associated with the right breast, where occurrences of 'breast' associated with the left breast will not be highlighted. In this way, the present system associates a first descriptor with at least one other descriptor in the current report so that the relevant words or sentences are determined and highlighted in the previous report.
For example, 'breast' can be associated with 'right', or (laterality, right), so that not all occurrences of breast are highlighted in the previous report, and only occurrences of breast associated with the 'right breast' are highlighted. Other descriptors can be associated with the first descriptor 'breast', such as ultrasound, or (modality, US); thus, only occurrences of breast associated with the 'right breast' and ultrasound or US are highlighted, and occurrences in the previous report of the 'right breast' and X-ray mammography or MAM are not highlighted. In addition, it is often the case that the radiologist will explicitly mention the location (for example, "right breast") of the lesion only at the beginning of a passage and will not repeat the location information in subsequent sentences. The present system determines that the following sentences refer to the same lesion and thus should also be highlighted, as described herein.
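The context carry-forward rule described above can be sketched as follows; the function, keyword list and sentences are hypothetical illustrations, not the patent's implementation. Once a sentence establishes the lesion's laterality, subsequent sentences that do not restate a laterality inherit it and are eligible for highlighting:

```python
def highlight(sentences, keywords, laterality):
    """Return the sentences to highlight: those matching a keyword while the
    carried-forward laterality context matches the query laterality."""
    hits, current_side = [], None
    for s in sentences:
        low = s.lower()
        # A sentence that mentions a side resets the laterality context.
        if "right" in low:
            current_side = "right"
        elif "left" in low:
            current_side = "left"
        if current_side == laterality and any(k in low for k in keywords):
            hits.append(s)
    return hits

sentences = [
    "The right breast shows a lobular mass at 10 o'clock.",
    "The mass has obscured margins.",   # inherits 'right' from the prior sentence
    "The left breast is unremarkable.",
]
hits = highlight(sentences, ["mass", "margins"], "right")
print(hits)
```

Here the second sentence is highlighted even though it never mentions 'right breast', because it is taken to describe the same lesion as the preceding sentence.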
FIG. 3 shows a block diagram illustrating a system 300 with the interaction flow between components of the system that are operatively coupled to each other according to embodiments of the present system. As shown in FIG. 3, the system includes a report analyzer 310, which can be a natural-language semantic processing module that receives an unstructured radiology report 315 and transforms it into a collection of interpretations 320 for output. Each interpretation 320 is associated with a modality attribute that indicates the type of imaging study (for example, X-ray such as a mammogram, ultrasound, MRI and other types of imaging) from which a finding has been identified, where the finding can be given BIRADS descriptors and referred to as a BIRADS finding. The BIRADS finding can be found automatically from the studied image or provided by a radiologist reading the image, for example. Each interpretation 320 has attributes that describe, if present, various aspects of a lesion, for example, laterality, location, depth, shape and/or other attributes, as shown in the BIRADS ultrasound GUI 230 in FIG. 2B, for example. The system 300 further includes an ontology mechanism 330 that receives descriptors or findings 335 that describe image content, for example, a lesion in the image of a breast, such as BIRADS descriptors that are automatically generated from automatic machine analysis of the image and/or provided by a radiologist reading the image. The ontology mechanism 330 translates at least one BIRADS descriptor into a list of syntactically or semantically related keywords 340. For example, a BIRADS descriptor such as "Laterality" is translated into side, boundary, border, right, R, left, L and other synonyms and abbreviations.
As shown in FIG. 3, the system 300 also includes a matching and discourse mechanism 350 operatively coupled to the report analyzer 310 and the ontology mechanism 330. The matching and discourse mechanism 350 translates a BIRADS finding, which can be a group of grouped BIRADS descriptors, into a set of search queries, and matches the search queries with the interpretations 320 provided by the report analyzer 310 to determine or identify corresponding or relevant interpretations 355.
For example, a BIRADS ultrasound (US) finding may include descriptors such as (modality, US), (laterality, right), (location, 10 o'clock). "US" is semantically mapped to words such as "ultrasound", "echo" and "sonography"; "right" to "right breast" and "right side"; and "10 o'clock" to "upper quadrant" and "upper inner quadrant". An interpretation of the second sentence in the following excerpt: "Bilateral breast ultrasound was performed. At the 10 o'clock position of the right breast, two contiguous cysts currently measuring an aggregate of 1.1 cm by 4.0 mm ...", can include (modality, US) and (laterality, right). This interpretation, namely (modality, US) and (laterality, right), matches the modality, laterality and location descriptors of the BIRADS ultrasound finding and, therefore, the section and/or the second sentence of the section is highlighted. The matching and discourse mechanism 350 is operatively coupled to a user interface display mechanism 360. In particular, the matching and discourse mechanism 350 outputs the identified relevant or corresponding interpretations 355 to the user interface display mechanism 360. The user interface display mechanism 360 provides its output 365 to a monitor 370 to display the report 315 with the relevant (or corresponding) fragments of the report highlighted according to the identified relevant interpretations 355 that correspond to the BIRADS finding as determined by the matching and discourse mechanism 350. As shown in FIG. 3 and in FIG. 9, the highlighted previous report 240, 940 with unstructured or free text is displayed side by side with the BIRADS descriptors (for example, 950, 960 in FIG. 9) associated with the highlighted parts of the previous or unstructured report.
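The matching step in this example can be sketched as follows; the attribute-agreement rule, data layout and names are hypothetical simplifications (for instance, a real engine might treat "bilateral" as compatible with "right", which this sketch does not):

```python
def matches(interpretation, query):
    """An interpretation matches the query when they share at least one
    attribute and every shared attribute has the same value."""
    shared = set(interpretation) & set(query)
    return bool(shared) and all(interpretation[k] == query[k] for k in shared)

# Query derived from the BIRADS ultrasound finding above.
query = {"modality": "US", "laterality": "right"}

# Interpretations as produced by a report analyzer for the two-sentence excerpt.
interps = [
    {"sentence": "Bilateral breast ultrasound was performed.",
     "modality": "US", "laterality": "bilateral"},
    {"sentence": "At the 10 o'clock position of the right breast, "
                 "two contiguous cysts were seen.",
     "modality": "US", "laterality": "right"},
]

relevant = [i["sentence"] for i in interps if matches(i, query)]
print(relevant)
```

Only the second sentence survives the match, so only that fragment would be highlighted in the displayed report.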
The BIRADS descriptors shown simultaneously with the highlighted parts of the previous or unstructured report can be displayed as annotations added to the image described by the BIRADS descriptors and/or BIRADS findings. FIG. 4 shows another system 400 that is similar to the system 300 shown in FIG. 3, except that the system 400 of FIG. 4 includes searching for, finding and identifying multiple unstructured-text or free-text reports 415, where the report analyzer 310 generates interpretations 420 of the multiple reports 415, and where the system 400 of FIG. 4 also analyzes the interpretations 420 and the keywords 340 to find or identify, through the matching and discourse mechanism 350, relevant reports and interpretations 455 that correspond to the BIRADS finding. In this embodiment, the output 465 of the user interface display mechanism 360 includes a list of reports with the relevant interpretations highlighted. This output 465 with the list of reports is provided to a user interface selection mechanism 470 for user selection from the list of reports. After the user selects a report, the selected report 475 is displayed on the monitor 370. FIG. 5 shows another system 500 where descriptors or content attributes are automatically extracted from the selected content, for example, images, instead of being provided by the system user or radiologist, for example. In this illustrative embodiment, the relevant sentences of a previous report are displayed according to the modality/laterality attributes of DICOM images, for example, when BIRADS descriptors are not available. As is well known, Digital Imaging and Communications in Medicine (DICOM) is an industry standard for distributing and viewing medical images and other medical information between computers, as well as for enabling digital communication between diagnostic and therapeutic equipment and systems from different manufacturers.
The content attributes of DICOM images, for example, can be extracted from the content itself and / or from metadata associated with the image file. For example, image attributes can be automatically extracted using computer image and / or text analyzers and identifiers using computer vision and / or extraction and identification algorithms, for example, to detect and identify lesions in an image of a breast. Alternatively or additionally, such algorithms and / or computer vision can be used to automatically extract attributes through the detection, recognition and / or identification of annotations or texts added to the image.
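The extraction of modality and laterality attributes from image metadata could be sketched as follows. A plain dictionary stands in for the DICOM header here; a real implementation would read the corresponding tags with a DICOM library, and the exact handling shown is an assumption for the sketch:

```python
# Illustrative sketch: pull modality/laterality content attributes from
# image metadata. A plain dict stands in for a parsed DICOM header; a real
# system would read the tags with a DICOM library rather than a dict.
def extract_attributes(dicom_metadata):
    """Return the subset of metadata usable as search attributes."""
    attrs = {}
    if "Modality" in dicom_metadata:
        attrs["modality"] = dicom_metadata["Modality"]
    if "ImageLaterality" in dicom_metadata:
        attrs["laterality"] = dicom_metadata["ImageLaterality"]
    return attrs
```

The returned attributes would then play the role of the extracted keywords 340 fed to the matching step.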
FIG. 5 includes components similar to those shown in FIGs. 3-4. While FIG. 5 shows the report analyzer 310 receiving a report 315 and providing interpretations 320, similar to FIG. 3, it is to be understood that the report analyzer 310 can receive and analyze several reports 415 to produce interpretations 420 of the plurality of reports 415, as described in connection with FIG. 4. As shown in FIG. 5, an attribute extractor, such as a DICOM attribute extractor 530, is configured to extract attributes from selected DICOM files or images 535, such as from metadata or from the content itself, for example using image analysis methods including computer vision.
The DICOM attribute extractor 530 extracts and provides attributes 340 to the correspondence and dissertation mechanism 350. For example, the extracted attributes or keywords 340 can be the modality, laterality and other attributes of the DICOM file or of the content of the DICOM file, such as the description and location of lesions in an image of a breast included in the DICOM file. Similar to the system 300 of FIG. 3, the correspondence and dissertation mechanism 350 compares the interpretations 320 of the report analyzer 310 with the extracted attributes 340, and identifies relevant interpretations 355 (from the report 315) that correspond to the extracted attributes or keywords 340. The relevant or corresponding report interpretations 355 are provided to the user interface display mechanism 360, which highlights the relevant interpretations and provides the report 365, with the relevant interpretations highlighted, to the monitor 370 for display.
FIG. 6 shows an illustrative example of a highlighted unstructured report 600, similar to the unstructured report 240 of FIG. 2A, where the relevant parts corresponding to the BIRADS findings/descriptors are highlighted (instead of being surrounded by the box 250 shown in FIG. 2A). In the example shown in FIG. 6, the BIRADS finding includes an ultrasound finding with a circumscribed lesion in the upper outer quadrant of the right breast. This embodiment of the present system highlights the sentences in the unstructured report 600 that are relevant and contain ultrasound-specific keywords corresponding to the BIRADS descriptors/findings. Notably, the sentences describing mammographic findings are not highlighted; the lesion is semantically mapped to "nodular density" and highlighted; ultrasound findings of the left breast are not highlighted.
FIG. 7 shows an illustrative embodiment of the ontology mechanism 330 shown in FIGs. 3-4. As shown in FIG. 7, the ontology engine 330 includes an analyzer or analysis module 710 that receives BIRADS findings 335 and parses them into descriptors, including text strings with a list of properties 715 corresponding to the descriptors. Examples of properties are "Laterality: Left Breast", "Location: upper inner quadrant", "Format: Irregular", "Margin: not circumscribed" and other BIRADS descriptors. The parsed BIRADS findings, descriptors or properties are provided to a mapper or mapping module 720 that accesses an ontology database 730, which can be a remotely located ontology server operationally coupled to the mapper 720, or can be data stored in a local memory of the present system and operationally coupled to the mapper 720. The ontology database 730 includes words, synonyms, abbreviations and the like. The mapper 720 associates each property 715 of the BIRADS findings 335 with semantically relevant words 735 that correspond to the property 715, including synonyms and/or abbreviations, for example. The relevant words 735 are passed through a stemmer 740 to reduce the relevant words 735 to their stems, bases or roots, such as by removing word endings to obtain the root of a word. For example, the stemmer 740 identifies the root 'search' of words such as 'searching' or 'searcher'. Other examples include stemming "heterogeneously" to obtain the stem "heterogeneous", and stemming "shadowing", which can be a value of the "posterior acoustic" property of an ultrasound finding, to obtain the root "shadow". The stemmer 740 produces stems, also referred to as keywords 340, for example, vectors of the stemmed properties with semantic associations, for the correspondence and dissertation mechanism 350 shown in FIGs. 3-4.
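The stemming step can be sketched with a few suffix-stripping rules. This is a minimal illustration only; a production system would use a full Porter stemmer, and the suffix list below is an assumption chosen to reproduce the examples in the text:

```python
# Minimal suffix-stripping sketch of the stemming step. A real Porter
# stemmer applies many ordered rule phases; these few rules are assumptions
# sufficient to reproduce the examples given in the description.
SUFFIXES = ["ing", "er", "ly", "ed", "es", "s"]

def stem(word):
    """Return a crude stem of `word` by stripping one known suffix."""
    word = word.lower()
    for suf in SUFFIXES:
        # keep at least a three-letter stem so short words survive intact
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word
```

For example, 'searching' and 'searcher' both reduce to 'search', and 'shadowing' reduces to 'shadow', matching the behavior described above.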
FIG. 8 shows an illustrative embodiment of the report analyzer 310 shown in FIGs. 3-4. As shown in FIG. 8, the report analyzer 310 includes a section segmentation module 810 that receives unstructured or free text reports 315 and segments the reports into segmented sections 815 (such as a header, history, procedure, findings and impressions), paragraphs and sentences. The segmented sections 815 are provided to a sentence segmentation module 820 that identifies sentences in the segmented sections 815. For each segmented section 815, the sentence segmentation module 820 provides a list of sentences per section 825 to an analyzer or analysis module 830.
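The section and sentence segmentation just described could be sketched as follows. The section names follow those listed above; the heading format and the naive sentence splitter are assumptions for the sketch, not the patented implementation:

```python
import re

# Illustrative sketch of report segmentation: split a free-text report
# into named sections, then each section into sentences. Heading format
# (a line such as "FINDINGS:") is an assumption about report layout.
SECTION_NAMES = ["HISTORY", "PROCEDURE", "FINDINGS", "IMPRESSIONS"]

def segment(report_text):
    sections, current = {"HEADER": []}, "HEADER"
    for line in report_text.splitlines():
        name = line.strip().rstrip(":").upper()
        if name in SECTION_NAMES:          # a heading starts a new section
            current = name
            sections[current] = []
        elif line.strip():
            sections[current].append(line.strip())
    # naive sentence split on ., ! or ? followed by whitespace
    return {sec: re.split(r"(?<=[.!?])\s+", " ".join(lines))
            for sec, lines in sections.items() if lines}
```

A downstream analyzer would then receive the per-section sentence lists, as the sentence segmentation module 820 does in FIG. 8.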
The analyzer 830 uses syntactic rules stored in a memory or database 840 to syntactically process and parse the received sentences 825 into words and phrases, and to group the words of a sentence into syntactic structures 835. For example, the syntactic rules describe the syntax of a given language, such as English, and according to the syntactic rules a processor is configured to analyze, divide and/or separate a natural language sentence into its constituent parts and/or categories, such as categories N, V and P, with the words of a sentence assigned to the categories N, V and P. An example of using a rule to group words and map the group to syntactic structures is NVNP, which describes a sentence comprising a noun phrase N, a verb V, another noun phrase N and a prepositional phrase P. The noun phrases are used in a semantic mapping 850, where nouns are checked against the medical ontology 860 to determine whether the noun is a medical term or not, and to determine its semantic category (e.g., anatomy, disease, procedure, etc.).
The semantic mapping 850 determines the report's modalities from the study's DICOM data that represents the image associated with the report, for example.
If the infrastructure does not allow access to these data, then the modalities (for example, MAM/US/RM) can be determined or inferred from the information in the report header. Typically the header includes text such as "R MAM BREAST ULTRASOUND", indicating that the file or report includes both x-ray mammography and ultrasound image information of the right breast, or "R MAM UNILAT DIGITAL DIAGNOSTIC", indicating information from an x-ray mammography of the right breast, for example.
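The header-based modality inference could be sketched as follows. The keyword table is an assumption derived from the two header examples above and is not exhaustive:

```python
# Illustrative sketch: infer report modalities from header text when DICOM
# data is unavailable. The token-to-modality table is an assumption based
# on the header examples in the description.
HEADER_MODALITIES = {
    "MAM": "MAM",          # x-ray mammography
    "ULTRASOUND": "US",
    "US": "US",
    "MR": "RM",
}

def modalities_from_header(header):
    """Return the sorted set of modalities mentioned in a header line."""
    tokens = header.upper().split()
    return sorted({m for tok, m in HEADER_MODALITIES.items() if tok in tokens})
```

So a combined mammography/ultrasound header yields both modalities, while a mammography-only header yields one.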
A semantic mapping module 850 receives the syntactic structures 835 from the analyzer 830 and maps each modality to a selection of modality-specific keywords that are used to discover the starting points of discussions. A discussion includes consecutive sentences that are all related to a specific modality. The modality is assigned to all sentences in a discussion. In particular, the semantic mapping module 850 uses a medical ontology stored in a memory 860, which includes medical terms, to detect medical terms for each syntactic structure received from the analyzer 830, including mapping the modality through modality-specific keywords.
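The grouping of sentences into modality discussions could be sketched as follows. A sentence containing a modality keyword starts a discussion and subsequent sentences inherit that modality until the next keyword appears; the keyword lists are assumptions for the sketch:

```python
# Illustrative sketch of discussion segmentation: a sentence mentioning a
# modality keyword starts a discussion, and following sentences inherit
# that modality. The keyword lists are assumptions, not the claimed ones.
MODALITY_KEYWORDS = {
    "US": ["ultrasound", "sonographic"],
    "MAM": ["mammogram", "mammographic"],
}

def assign_modalities(sentences, default="UNKNOWN"):
    """Tag each sentence with the modality of its current discussion."""
    current, tagged = default, []
    for s in sentences:
        low = s.lower()
        for modality, words in MODALITY_KEYWORDS.items():
            if any(w in low for w in words):
                current = modality   # this sentence starts a new discussion
                break
        tagged.append((current, s))
    return tagged
```

This mirrors the rule above that the modality is assigned to every sentence in a discussion, not only the sentence naming the modality.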
The output 855 of the semantic mapping module 850 is operationally coupled to a filter module 870 and includes the medical terms detected from each syntactic structure. These detected medical terms 855 are filtered through the filter module 870 to provide the list of interpretations 320 to the correspondence and dissertation mechanism 350 shown in FIGs. 3-4. The report analyzer 310 can also be referred to as a natural language processing module that is used to interpret sentences; interpretations 320 are extracted which often include the problem detected from the image, body location, modifiers, etc. For example, the Medical Language Extraction and Encoding System (MedLEE) can be used to generate such an interpretation. In addition, the modality keywords derived through the semantic mapping module 850 are assigned to the interpretations 320 by the natural language processing module or report analyzer 310.
Returning to FIG. 7, each BIRADS descriptor, typically a pair of text strings, included in the BIRADS finding 335 is translated into keywords syntactically and semantically using the ontology server 730. The value and name of the descriptor or property of a selected BIRADS descriptor are first stemmed by the stemmer 740. For example, a mammography finding may include a list of descriptors. A descriptor describes an aspect of the lesion. For example, "Laterality" can be the name of the descriptor, and its value can be "left", "right", "both" or "none". This descriptor (i.e., "Laterality") describes the side of the breast on which the lesion is located. Other frequently used descriptors include "Depth", "Location" and "Calcification characteristic", for example. The stems are used as the first class of keywords. For example, a conventional Porter stemmer can be used. Then, the values and name of the property are semantically mapped using the mapping module 720.
As an illustrative example, the location of a lesion is typically characterized by the quadrant and clock position. Using the ontology mechanism 330, positions 9 to 12 o'clock on the left breast are mapped to the upper inner quadrant of the left breast, and positions 9 to 12 o'clock on the right breast to the upper outer quadrant of the right breast. Further, "mass" is mapped to "lesion", which is a more general concept, using the Unified Medical Language System / Systematized Nomenclature of Medicine (UMLS/SNOMED™) ontology. All derived keywords are stemmed.
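The clock-position-to-quadrant mapping just described can be sketched directly. Only the 9-to-12 o'clock rule stated in the text is encoded, with the 12-to-3 arc mirrored as an illustrative assumption; other hours are not handled:

```python
# Sketch of the clock-to-quadrant mapping. The 9-12 o'clock rule comes
# from the description (inner on the left breast, outer on the right);
# the mirrored 12-3 arc is an assumption, and lower hours are omitted.
def quadrant(laterality, hour):
    """Map a breast laterality and clock-face hour to an upper quadrant."""
    if hour in (9, 10, 11, 12):
        side = "inner" if laterality == "left" else "outer"
    elif hour in (1, 2, 3):
        side = "outer" if laterality == "left" else "inner"
    else:
        raise ValueError("only upper-quadrant hours handled in this sketch")
    return f"upper {side} quadrant of the {laterality} breast"
```

The mirror symmetry reflects that the same clock hour lies on opposite anatomical sides of the two breasts.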
To summarize, the following actions are taken to map a BIRADS annotation to findings in the text reports. The BIRADS annotation can be provided or added to an image (for example, of a breast) automatically or manually by a radiologist reading the image, for example. 1. Using the report analyzer or natural language processing module 310, reports 315 are segmented by the segmentation module 820 into sections (header, history, procedure, findings and impressions), paragraphs and sentences. The modalities of the report 315 can be obtained from the study's DICOM data. If the IT infrastructure does not allow access to these data, the modalities (MAM/US/RM) are determined from the information in the report header. Typically the header appears as follows: "R MAM BREAST ULTRASOUND", "R MAM UNILAT DIGITAL DIAGNOSTIC". 2. Each modality is mapped by the semantic mapping module 850 to a selection of the modality-specific keywords that are used to discover the starting points of the discussions, where a discussion includes consecutive sentences that are all related to a specific modality. The modality is assigned to all sentences in a discussion. 3. The natural language processing module 310 is used to interpret sentences. Interpretations are extracted that often include descriptions of the problem, body location and modifiers. MedLEE, in this case, can be used to generate such an interpretation. In addition, the modality information derived in step 2 is attributed to the interpretation. 4. Each BIRADS descriptor, typically a pair of text strings, is translated into keywords syntactically and semantically using the ontology mechanism 330. The BIRADS finding and/or descriptors are analyzed by the analyzer 710 to obtain a list of BIRADS descriptor properties. The name and values of each property are mapped semantically with the mapper 720 to obtain semantically relevant words, such as synonyms and/or abbreviations, associated with each property.
The value and property name of a selected BIRADS descriptor are then stemmed by the stemmer 740. The stems are used as the first class of keywords. The Porter stemmer can be used here, for example. If desired, multiple mapping and/or stemming passes can be performed in which, for example, the value and property name of a selected BIRADS descriptor are first mapped semantically through a mapper to find relevant words, and then the words are stemmed by a stemmer. Next, the determined or found relevant words or semantic associations are stemmed to obtain a vector (or vectors) with the stemmed semantic associations. a. The location of a lesion is typically characterized by the clock and quadrant position. Using the ontology mechanism, positions 9 to 12 o'clock on the left breast are mapped to the upper inner quadrant of the left breast, and positions 9 to 12 o'clock on the right breast to the upper outer quadrant of the right breast. b. "Mass" is mapped to "lesion", a more general concept, using, for example, the UMLS/SNOMED ontology. c. All derived keywords are stemmed. 5. When the user selects a BIRADS finding that can comprise multiple BIRADS descriptors, the system evaluates a numerical relevance score indicating how well each sentence in each discussion corresponds to the selected BIRADS finding. a. A straightforward way is to count the number of occurrences of the stems derived in step 4 in a sentence. The higher the count, the more relevant the sentence. b. A more clinically relevant approach is to match a sentence to a BIRADS finding according to the interpretations. In this approach, an interpretation is treated as a feature vector in the same BIRADS space as a BIRADS finding. For example, the interpretation of a sentence can be modeled as a vector of properties including modality, laterality, location, margin and so on, in the same way as a BIRADS finding.
To compute the numerical relevance score of a sentence in relation to a BIRADS finding, each property in the interpretation vector is matched against the corresponding property of the BIRADS finding: if the property values are the same, the match score for this property is 1; otherwise it is 0. In addition, a weight is assigned to the property: the more important the property is for the current clinical context, the higher the weight. The relevance score of a sentence in relation to a BIRADS finding is the sum, over the properties, of the match score multiplied by the weight. 6. A GUI component highlights the sentence(s) with the highest relevance score. FIG. 9 illustrates a GUI 900 that includes, side by side, a BIRADS GUI 910 for selection of the BIRADS descriptor(s) and/or finding(s), and a previous report 940 that can be selected by the user or extracted automatically from previous reports as described.
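The weighted relevance scoring of step 5b can be sketched as follows. The particular weight values are assumptions chosen for illustration; only the scoring rule itself (match score of 1 per equal property value, multiplied by a per-property weight, then summed) comes from the text:

```python
# Sketch of the weighted relevance score: an interpretation and a BIRADS
# finding are both property vectors; equal values score 1, scaled by a
# clinical-importance weight. The weight values below are assumptions.
WEIGHTS = {"modality": 2.0, "laterality": 2.0, "location": 1.0, "margin": 1.0}

def relevance(interpretation, finding, weights=WEIGHTS):
    """Sum of weight * match-score over the shared properties."""
    return sum(w for prop, w in weights.items()
               if prop in interpretation and prop in finding
               and interpretation[prop] == finding[prop])

def most_relevant(sentences_with_interps, finding):
    """Return the sentence whose interpretation scores highest."""
    return max(sentences_with_interps,
               key=lambda pair: relevance(pair[1], finding))[0]
```

A GUI component would then highlight the highest-scoring sentence(s), as in step 6.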
FIG. 9 illustrates a GUI 900 showing that, when a BIRADS finding is selected in the system from a BIRADS GUI 910, the relevant text in a previous text report 940 is highlighted and shown side by side with the current, first or structured reference study 910 that includes the BIRADS descriptors, together with the highlighted parts of the previous or unstructured report. Thus, in response to the selection of a BIRADS finding including selected or extracted BIRADS descriptors, such as 'US' 950 and laterality 'Right' 960, the relevant text in a previous text report is automatically highlighted. In FIG. 9, the BIRADS annotation of a lesion on a 'US' ultrasound image is included in table 'Ml' 965, and includes a list of descriptor menus such as 'Anatomical position of the lesion' 970, 'Mass characteristics' 972, 'Surrounding tissue' 974, 'Calcification' 976, 'Vascularity' 978, 'Measurements' 980 and 'Assessment for this finding' 982. After selecting a menu, further descriptors are displayed for user selection, such as a submenu for the menu 'Anatomical position of the lesion' 982, where descriptors such as 'Laterality' 984, 'Depth' 986 and 'Location' 988 are provided for further user selection of the descriptors, where 'Right' 960 is selected from a further menu. As shown in FIG. 9, different parts of the previous report 940 can be highlighted differently. For example, the finding with the same laterality is highlighted using a first color, such as a foreground color, shown in FIG. 9 through boxes 992, 994, which include 'right breast' shown in FIG. 9 as underlined by dotted lines, and an ultrasound finding is highlighted using a second color, such as a background color that is different from the first color, shown in FIG. 9 through a box 996 that includes 'ultrasound' shown in FIG. 9 as underlined by dotted lines.
FIG. 10 shows a part of a display system 1000 in accordance with embodiments of the present system. For example, a part of the present system may include a processor 1010 operationally coupled to a memory 1020, a display 1030 and a user input device 1040. The memory 1020 can be any type of device for storing application data as well as other data related to the described operations. The application data and other data are received by the processor 1010 to configure (for example, program) the processor 1010 to perform the operational actions in accordance with the present system. The processor 1010 thus configured becomes a special-purpose machine particularly suited to performing in accordance with the present system.
The operational actions may include requesting, selecting, providing and/or producing content, such as displaying annotated images with structured descriptors, for example, BIRADS descriptors, and/or unstructured or free text reports. The user input device 1040 can include a keyboard, mouse, trackball or other device, including touch-sensitive displays, which can be standalone or be a part of a system, such as part of a personal computer, personal digital assistant, mobile phone, set-top box, television or other device, to communicate with the processor 1010 via any operable link. The user input device 1040 can be operable to interact with the processor 1010, including allowing interaction within a UI as described here. Clearly, the processor 1010, memory 1020, display 1030 and/or user input device 1040 can all or partly be a part of a computer system or other device, such as a client and/or server as described here.
The methods of the present system are particularly suited to be carried out by a computer software program, such a program containing modules corresponding to one or more of the individual steps or actions described and/or envisaged by the present system. Such a program can, of course, be incorporated into a computer-readable medium, such as an integrated chip, a peripheral device or memory, such as the memory 1020 or other memory coupled to the processor 1010. For example, the various components of the present system, such as the report analyzer, the ontology engine, the correspondence and dissertation engine, the UI and display engine, as well as the analyzer, mapper and stemmers, can be software modules executed by the processor 1010 and/or hardware devices configured to perform the desired functions. In this way, the components and modules described in the present system can be implemented in software, hardware, firmware, or some combination of software, hardware and/or firmware; and/or implemented in another way. The modules illustrated in FIGs. 3-5 and 7-8 can be placed within a single processing unit. The processor 1010 can include multiple processing units, and some of these processing units can be located remotely from one another, where the various modules can be located remotely from other modules and operative communication between the modules can be achieved through one or more wired and/or wireless communication links. The program and/or parts of the program contained in the memory 1020 configure the processor 1010 to implement the methods, operational actions and functions disclosed here. The memories can be distributed, for example, between clients and/or servers, or be local, and the processor 1010, where additional processors can be provided, can also be distributed or can be unitary. The memories can be implemented as electrical, magnetic or optical memory, or any combination of these and other types of storage devices.
In addition, the term "memory" must be interpreted broadly enough to encompass any information capable of being read from or written to an address in an addressable space accessible by the processor 1010. With this definition, information accessible via a network 1250 and/or a server is still within the memory, for instance because the processor 1010 can retrieve the information from the network for operation in accordance with the present system, as can the various databases that may reside on servers, such as the ontology database or server 730, the syntactic rules database or server 840 and/or the medical ontology database or server 860. The processor 1010 is operable to provide control signals and/or perform operations in response to input signals from the user input device 1040 as well as in response to other network devices, and to execute instructions stored in the memory 1020. The processor 1010 can be a general-purpose or application-specific integrated circuit(s). In addition, the processor 1010 may be a dedicated processor for performing in accordance with the present system, or it may be a general-purpose processor in which only one of many functions operates to perform in accordance with the present system. The processor 1010 can operate using a part of a program or multiple program segments, or it can be a hardware device using a multi-purpose or dedicated integrated circuit.
Although the present system has been described with reference to a medical system, for example, a MAM/US/RM imaging system, it is also envisaged that the present system can be extended to other imaging, visualization, reporting or analysis systems and the like. In this way, the present system can be used to automatically find relevant free text reports and highlight words and sentences related to structured descriptors selected by a user or automatically extracted from images that are annotated with such descriptors, where not all occurrences of a word are highlighted, but only relevant occurrences, as described.
Certain additional advantages and features of this invention may be apparent to those skilled in the art after studying this disclosure, or may be experienced by persons employing the new system and method of the present invention, which provides faster, easier and more reliable correlation among several reports having structured annotations and/or unstructured texts, for example. Other variations of the present invention would readily occur to a person skilled in the art and are encompassed by the following claims. Through the operation of the present system, an automatic correlation is provided among different reports related to a common image, such as an image of a breast under examination, in which selecting descriptors, such as BIRADS descriptors, automatically results in finding relevant reports and highlighting relevant words and sentences in the reports found or obtained. Additionally or alternatively, opening or selecting a report that includes descriptors that describe an image, and/or opening an image that includes annotated descriptors, automatically or in response to a user action, such as a 'find' command, results in searching for and finding relevant reports related to the selected image or report, as well as highlighting relevant sentences and words in the found reports that are related to the selected report and the descriptors.
Of course, it must be appreciated that any of the above embodiments or processes can be combined with one or more other embodiments and/or processes, or be separated and/or carried out among separate devices or device parts in accordance with the present systems, devices and methods. Finally, the above discussion is intended to be merely illustrative of the present systems and methods, and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described in particular detail with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by persons skilled in the art without departing from the broader and intended scope and spirit of the present system as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.
In the interpretation of the appended claims, it should be understood that: a) the word "comprising" does not exclude the presence of elements or actions other than those listed in a given claim; b) the word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements; c) any reference signs in the claims do not limit their scope; d) several "means" may be represented by the same item or by the same structure or function implemented in software or hardware; e) any of the disclosed elements may be comprised of hardware parts (for example, including discrete and integrated electronic circuitry), software parts (for example, computer programs), and any combination thereof; f) hardware parts may be comprised of one or both of digital and analog parts; g) any of the disclosed devices or parts thereof may be combined or separated into further parts unless specifically stated otherwise; h) no specific sequence of actions or steps is intended to be required, including an order of actions represented in the flow diagrams, unless specifically indicated; and i) the term "plurality of" an element includes two or more of the claimed element, and does not imply any particular range or number of elements; that is, a plurality of elements can be as few as two elements, and can include an immeasurable number of elements.
Claims (8)
[0001]
1. METHOD FOR VIEWING A MEDICAL REPORT DESCRIBING RADIOLOGICAL IMAGES, comprising opening a structured medical report (100) describing one or more radiological images using descriptors selected from a predefined list of descriptors, characterized in that the descriptors of the structured medical report (100) describe a lesion and at least one descriptor includes an image modality of the one or more radiological images that include the annotated lesion, and, in response to the opening action, performing the following actions: searching, through a processor (1010), for an additional unstructured report (315, 415, 600) from a previous study (240, 940) related to the descriptors of the structured medical report (100) for the same patient, in which the searching matches keywords (340), translated with an ontology from the descriptors (220) of the structured medical report (100), against an interpretation (355, 455) of one or more interpretations (320, 420) obtained from the free text of the additional unstructured report (315, 415, 600), where the free text of the additional unstructured report (315, 415, 600) includes sentences and words in the sentences, where the interpretation (355) of the one or more interpretations (320, 420) includes a plurality of attributes (340) obtained from the free text and describes another lesion, in which an attribute of the plurality of attributes (340) includes an image modality of the image that includes the other lesion annotated in the additional unstructured report (315, 415), in which the searching comprises matching the image modality of the one attribute (340) of the other annotated lesion, obtained from the free text of the unstructured report (315, 415, 600), with the image modality of the keyword (340) translated from the at least one descriptor of the structured report (100); and highlighting, in the free text of the additional unstructured report (315, 415, 600) from a previous study (240, 940), at least one of the words and sentences selected from a group consisting of words in the sentences and the sentences of the one or more interpretations (320, 420) corresponding to keywords derived from the descriptors used in the structured medical report (100).
[0002]
2. METHOD, according to claim 1, characterized in that the predefined list of descriptors includes descriptors standardized by the American College of Radiology, and the descriptors identify an anatomical location of an annotated lesion, in which a keyword (340) translated from the descriptors includes a laterality of the identified anatomical location; in which the free text used to obtain the plurality of attributes (340) that describe the other lesion annotated in the free text of the additional unstructured report (315, 415, 600) from a previous study (240, 940) excludes the descriptors, in which the plurality of attributes (340) comprises the image modality and a laterality; in which the matching comprises matching the laterality of the plurality of attributes (340) of the other annotated lesion, obtained from the free text of the unstructured report (315, 415, 600) excluding the descriptors, with the laterality of the keyword (340) translated from the descriptors of the structured report, and matching the image modality of the at least one attribute (340) of the other annotated lesion, obtained from the free text of the unstructured report (315, 415, 600), with the image modality of the keyword (340) translated from the descriptors of the structured report.
[0003]
3. METHOD, according to claim 1, characterized in that searching, through a processor, for an additional unstructured report (315, 415, 600) of a previous study (240, 940) related to the descriptors of the structured medical report (100) for the same patient includes: determining a plurality of image modalities for the additional unstructured report (315, 415, 600) from headers in the free text of the additional unstructured report (315, 415, 600); determining an image modality, selected from the plurality of image modalities, for at least one sentence of the free text in the additional unstructured report (315, 415, 600); and mapping words comprising the at least one sentence of the free text in the additional unstructured report (315, 415, 600) to at least a second attribute of the plurality of attributes of an interpretation (355, 455) identified by the determined image modality.
[0004]
4. METHOD, according to claim 3, characterized in that the at least second attribute of the plurality of attributes (340) of an interpretation (355, 455) identified by the determined image modality comprises laterality.
[0005]
5. METHOD, according to claim 1, characterized in that the free text of the unstructured report (315, 415, 600) comprises a plurality of medical imaging modalities; and in which the searching differentiates the sentences in the free text among the plurality of medical imaging modalities, and the searching uses sentences in the free text that correspond to the image modality of the other annotated lesion to obtain a second attribute of the plurality of attributes (340), which second attribute excludes the image modality of the other annotated lesion.
[0006]
6. METHOD, according to claim 1, characterized by further comprising the action of simultaneously displaying the structured medical report (100) and the highlighted free text of the additional unstructured report (315, 415, 600) from a previous study (240, 940) of the same patient.
[0007]
7. METHOD, according to claim 1, characterized by the descriptors being automatically extracted from the structured medical report (100) in response to the opening action.
[0008]
8. METHOD according to claim 1, characterized in that the structured medical report (100) includes an image annotated with the used descriptors, and the used descriptors are automatically extracted from the image in response to the opening action.
Similar technologies:
Publication number | Publication date | Patent title
BR112012026477B1|2021-02-02|Method for viewing a medical report describing radiological images
CN105940401B|2020-02-14|System and method for providing executable annotations
US8744149B2|2014-06-03|Medical image processing apparatus and method and computer-readable recording medium for image data from multiple viewpoints
US20130024208A1|2013-01-24|Advanced Multimedia Structured Reporting
JP6914839B2|2021-08-04|Report content context generation for radiation reports
US10276265B2|2019-04-30|Automated anatomically-based reporting of medical images via image annotation
JP6461909B2|2019-01-30|Context-driven overview view of radiation findings
RU2711305C2|2020-01-16|Binding report/image
US10729396B2|2020-08-04|Tracking anatomical findings within medical images
WO2015092633A1|2015-06-25|Automatic creation of a finding centric longitudinal view of patient findings
JP2014039852A|2014-03-06|Information processor, information processing method and program
US20180060488A1|2018-03-01|Customizing annotations on medical images
EP2656243B1|2019-06-26|Generation of pictorial reporting diagrams of lesions in anatomical structures
JP2019033924A|2019-03-07|Learning data generation support device, method for operating learning data generation support device and learning data generation support program
US10282516B2|2019-05-07|Medical imaging reference retrieval
WO2019193983A1|2019-10-10|Medical document display control device, medical document display control method, and medical document display control program
US20190325249A1|2019-10-24|System and method for automatic detection of key images
WO2021162008A1|2021-08-19|Document creation assistance device, document creation assistance method, and program
Seifert et al.2014|Intelligent healthcare applications
WO2021157718A1|2021-08-12|Document creation assistance device, document creation assistance method, and program
JP2020520490A|2020-07-09|System and method for computer-aided retrieval of image slices for signs of findings
WO2020165130A1|2020-08-20|Mapping pathology and radiology entities
CN113329684A|2021-08-31|Comment support device, comment support method, and comment support program
Patent family:
Publication number | Publication date
JP5744182B2|2015-07-01|
BR112012026477A2|2016-08-09|
EP2561458A2|2013-02-27|
WO2011132097A2|2011-10-27|
CN102844761A|2012-12-26|
US20140149407A1|2014-05-29|
WO2011132097A3|2012-01-12|
CN102844761B|2016-08-03|
US10762168B2|2020-09-01|
EP2561458B1|2021-07-21|
JP2013525898A|2013-06-20|
Cited references:
Publication number | Application date | Publication date | Applicant | Patent title

US6266435B1|1993-09-29|2001-07-24|Shih-Ping Wang|Computer-aided diagnosis method and system|
US5729620A|1993-09-29|1998-03-17|Wang; Shih-Ping|Computer-aided diagnosis system and method|
US6785410B2|1999-08-09|2004-08-31|Wake Forest University Health Sciences|Image reporting method and system|
US20050096530A1|2003-10-29|2005-05-05|Confirma, Inc.|Apparatus and method for customized report viewer|
US20050273365A1|2004-06-04|2005-12-08|Agfa Corporation|Generalized approach to structured medical reporting|
US20060136259A1|2004-12-17|2006-06-22|General Electric Company|Multi-dimensional analysis of medical data|
US7616793B2|2004-12-30|2009-11-10|Hologic, Inc.|Medical image review workstation with integrated content-based resource retrieval|
US20060242143A1|2005-02-17|2006-10-26|Esham Matthew P|System for processing medical image representative data from multiple clinical imaging devices|
JP5128154B2|2006-04-10|2013-01-23|富士フイルム株式会社|Report creation support apparatus, report creation support method, and program thereof|
JP4767759B2|2006-06-02|2011-09-07|富士フイルム株式会社|Interpretation report creation device|
US10796390B2|2006-07-03|2020-10-06|3M Innovative Properties Company|System and method for medical coding of vascular interventional radiology procedures|
JP5098253B2|2006-08-25|2012-12-12|コニカミノルタエムジー株式会社|Database system, program, and report search method|
JP2008253551A|2007-04-05|2008-10-23|Toshiba Corp|Image reading report search apparatus|
JP5264136B2|2007-09-27|2013-08-14|キヤノン株式会社|MEDICAL DIAGNOSIS SUPPORT DEVICE, ITS CONTROL METHOD, COMPUTER PROGRAM, AND STORAGE MEDIUM|
US20100076780A1|2008-09-23|2010-03-25|General Electric Company, A New York Corporation|Methods and apparatus to organize patient medical histories|
US20100145720A1|2008-12-05|2010-06-10|Bruce Reiner|Method of extracting real-time structured data and performing data analysis and decision support in medical reporting|
US8600772B2|2009-05-28|2013-12-03|3M Innovative Properties Company|Systems and methods for interfacing with healthcare organization coding system|
US10762168B2|2010-04-19|2020-09-01|Koninklijke Philips N.V.|Report viewer using radiological descriptors|US10762168B2|2010-04-19|2020-09-01|Koninklijke Philips N.V.|Report viewer using radiological descriptors|
EP2487602A3|2011-02-11|2013-01-16|Siemens Aktiengesellschaft|Assignment of measurement data to information data|
US9177110B1|2011-06-24|2015-11-03|D.R. Systems, Inc.|Automated report generation|
KR101520613B1|2012-02-06|2015-05-15|삼성메디슨 주식회사|Method and apparatus for providing ulrtasound image data|
WO2013181638A2|2012-05-31|2013-12-05|Ikonopedia, Inc.|Image based analytical systems and processes|
CN103530491B|2012-07-06|2017-06-30|佳能株式会社|Apparatus and method for generating audit report|
RU2640009C2|2012-08-22|2017-12-25|Конинклейке Филипс Н.В.|Automatic detection and extraction of previous annotations, relevant for imaging study, for efficient viewing and reporting|
EP2904589B1|2012-10-01|2020-12-09|Koninklijke Philips N.V.|Medical image navigation|
KR102043130B1|2012-11-16|2019-11-11|삼성전자주식회사|The method and apparatus for computer aided diagnosis|
WO2014081867A2|2012-11-20|2014-05-30|Ikonopedia, Inc.|Secure data transmission|
US9904966B2|2013-03-14|2018-02-27|Koninklijke Philips N.V.|Using image references in radiology reports to support report-to-image navigation|
WO2014155273A1|2013-03-29|2014-10-02|Koninklijke Philips N.V.|A context driven summary view of radiology findings|
US11183300B2|2013-06-05|2021-11-23|Nuance Communications, Inc.|Methods and apparatus for providing guidance to medical professionals|
US20140365239A1|2013-06-05|2014-12-11|Nuance Communications, Inc.|Methods and apparatus for facilitating guideline compliance|
US9542481B2|2013-06-21|2017-01-10|Virtual Radiologic Corporation|Radiology data processing and standardization techniques|
WO2015044810A2|2013-09-27|2015-04-02|Koninklijke Philips N.V.|A system for assisting the transcription of lesion measurements|
US9953542B2|2013-10-31|2018-04-24|Dexcom, Inc.|Adaptive interface for continuous monitoring devices|
WO2015079354A1|2013-11-26|2015-06-04|Koninklijke Philips N.V.|System and method of determining missing interval change information in radiology reports|
WO2015079373A1|2013-11-26|2015-06-04|Koninklijke Philips N.V.|Automatically setting window width/level based on referenced image context in radiology report|
US10474742B2|2013-12-20|2019-11-12|Koninklijke Philips N.V.|Automatic creation of a finding centric longitudinal view of patient findings|
CN105940401B|2014-01-30|2020-02-14|皇家飞利浦有限公司|System and method for providing executable annotations|
EP3132367A1|2014-04-17|2017-02-22|Koninklijke Philips N.V.|Method and system for visualization of patient history|
US10586618B2|2014-05-07|2020-03-10|Lifetrack Medical Systems Private Ltd.|Characterizing states of subject|
US9779505B2|2014-09-30|2017-10-03|Toshiba Medical Systems Corporation|Medical data processing apparatus and method|
US10650267B2|2015-01-20|2020-05-12|Canon Medical Systems Corporation|Medical image processing apparatus|
EP3254211A1|2015-02-05|2017-12-13|Koninklijke Philips N.V.|Contextual creation of report content for radiology reporting|
JP6826039B2|2015-02-05|2021-02-03|コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V.|Communication system for dynamic checklists to support radiation reporting|
US9842390B2|2015-02-06|2017-12-12|International Business Machines Corporation|Automatic ground truth generation for medical image collections|
JP6748094B2|2015-03-09|2020-08-26|コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V.|Building an episode of computer-aided care|
CN107750383A|2015-06-12|2018-03-02|皇家飞利浦有限公司|For the devices, systems, and methods for the timeline for showing semantic classification|
JP6748199B2|2015-10-02|2020-08-26|コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V.|System for mapping findings to relevant echocardiographic loops|
US10140273B2|2016-01-19|2018-11-27|International Business Machines Corporation|List manipulation in natural language processing|
US20170221204A1|2016-01-28|2017-08-03|Siemens Medical Solutions Usa, Inc.|Overlay Of Findings On Image Data|
US9984772B2|2016-04-07|2018-05-29|Siemens Healthcare Gmbh|Image analytics question answering|
WO2017174591A1|2016-04-08|2017-10-12|Koninklijke Philips N.V.|Automated contextual determination of icd code relevance for ranking and efficient consumption|
US10978190B2|2016-06-16|2021-04-13|Koninklijke Philips N.V.|System and method for viewing medical image|
US10276265B2|2016-08-31|2019-04-30|International Business Machines Corporation|Automated anatomically-based reporting of medical images via image annotation|
US20180060534A1|2016-08-31|2018-03-01|International Business Machines Corporation|Verifying annotations on medical images using stored rules|
US10729396B2|2016-08-31|2020-08-04|International Business Machines Corporation|Tracking anatomical findings within medical images|
US20190287665A1|2016-10-06|2019-09-19|Koninklijke Philips N.V.|Template-based medical summary interface generation system|
CN109952613A|2016-10-14|2019-06-28|皇家飞利浦有限公司|The system and method for relevant previous radiation research are determined for using PACS journal file|
EP3613053A1|2017-04-18|2020-02-26|Koninklijke Philips N.V.|Holistic patient radiology viewer|
USD855651S1|2017-05-12|2019-08-06|International Business Machines Corporation|Display screen with a graphical user interface for image-annotation classification|
US11244746B2|2017-08-04|2022-02-08|International Business Machines Corporation|Automatically associating user input with sections of an electronic report using machine learning|
US10783633B2|2018-04-25|2020-09-22|International Business Machines Corporation|Automatically linking entries in a medical image report to an image|
US10825173B2|2018-04-25|2020-11-03|International Business Machines Corporation|Automatically linking a description of pathology in a medical image report to an image|
US11222166B2|2019-11-19|2022-01-11|International Business Machines Corporation|Iteratively expanding concepts|
US11210508B2|2020-01-07|2021-12-28|International Business Machines Corporation|Aligning unlabeled images to surrounding text|
Legal status:
2016-09-13| B25D| Requested change of name of applicant approved|Owner name: KONINKLIJKE PHILIPS N.V. (NL) |
2016-09-27| B25G| Requested change of headquarter approved|Owner name: KONINKLIJKE PHILIPS N.V. (NL) |
2019-01-08| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2019-09-17| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-08-18| B06A| Notification to applicant to reply to the report for non-patentability or inadequacy of the application [chapter 6.1 patent gazette]|
2020-11-24| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2021-02-02| B16A| Patent or certificate of addition of invention granted|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 29/03/2011, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application number | Application date | Patent title
US32564010P| true| 2010-04-19|2010-04-19|
US61/325,640|2010-04-19|
PCT/IB2011/051338|WO2011132097A2|2010-04-19|2011-03-29|Report viewer using radiological descriptors|